OpenAI Sublime Text Plugin

tl;dr

OpenAI Completion is a Sublime Text plugin that uses LLMs to provide first-class code assistant support within the editor.

It's not locked to just OpenAI anymore: the llama.cpp server and ollama are supported as well.

[!NOTE] I think this plugin has reached its final state, meaning I have no further development of it in my plans. I still plan to fix bugs and review PRs, if any, but the tons of little enhancements that could be applied to smooth out minor issues and roughness here and there likely never will be.

What I do have in my plans is to implement an ST front end for the plandex tool, based on some parts of this plugin's codebase, to get (and to bring) fancy and powerful agent-ish capabilities to the ST ecosystem. So stay tuned.

Features

  • Code manipulation (append, insert and edit) of selected code with OpenAI models.
  • Chat mode powered by whatever model you'd like.
  • GPT-4 support.
  • llama.cpp's server, Ollama and all the rest of the OpenAI-ish API-compatible services.
  • Dedicated chat histories and assistant settings per project.
  • Ability to send whole files or parts of them as additional context.
  • Markdown syntax with code language syntax highlighting (Chat mode only).
  • Server-sent events (SSE) streaming (i.e. you don't have to wait for ages until GPT-4 prints something out).
  • Various info in the status bar: model name, mode, sent/received tokens.
  • Proxy support.

ChatGPT completion demo

https://github.com/yaroslavyaroslav/OpenAI-sublime-text/assets/16612247/37b98cc2-e9cd-46a6-ac5d-03845313096b

video sped up to 1.7x


https://github.com/yaroslavyaroslav/OpenAI-sublime-text/assets/16612247/69f609f3-336d-48e8-a574-3cb7fda5822c

video sped up to 1.7x

Requirements

  • Sublime Text 4
  • llama.cpp or ollama installed, OR
  • an API key for a remote LLM service provider, e.g. OpenAI

Installation

  1. Install the Sublime Text Package Control plugin if you haven't done this before.
  2. Open the command palette and type Package Control: Install Package.
  3. Type OpenAI and press Enter.

Usage

AI Assistance use case

ChatGPT mode works the following way:

  1. Select some text or even whole tabs to include them in the request.
  2. Run either the OpenAI: Chat Model Select or the OpenAI: Chat Model Select With Tabs command.
  3. Input a request in the input window, if prompted.
  4. The model will print a response in the output panel by default, but you can switch that to a separate tab with OpenAI: Open in Tab.
  5. To get an existing chat in a new window, run OpenAI: Refresh Chat.
  6. To reset the history, the OpenAI: Reset Chat History command comes to the rescue.

[!NOTE] For the sake of convenience, you're advised to bind at least OpenAI: New Message, OpenAI: Chat Model Select and OpenAI: Show output panel; you can do that in the plugin settings. A rough sketch of such bindings follows.
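
For illustration, user key bindings for those three commands might look like the sketch below. The openai command with the chat_completion mode is taken verbatim from the Key bindings section further down; the openai_panel command name and the OpenAI Chat output panel name are assumptions here, so verify them against the plugin's own command and key binding files:

[
    // OpenAI: New Message (command and args per the Key bindings section below)
    { "keys": ["super+k", "super+'"], "command": "openai", "args": { "mode": "chat_completion" } },
    // OpenAI: Chat Model Select -- "openai_panel" is an assumed command name
    { "keys": ["super+k", "super+m"], "command": "openai_panel" },
    // OpenAI: Show output panel -- show_panel is a Sublime built-in; the panel name is assumed
    { "keys": ["super+k", "super+o"], "command": "show_panel", "args": { "panel": "output.OpenAI Chat" } }
]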

Chat history management

You can keep separate chat histories and assistant settings for a given project by appending the following snippet to its project settings:

{
    "settings": {
        "ai_assistant": {
            "cache_prefix": "your_project_name"
        }
    }
}

Additional request context management

You can add a few things to your request:

  • a multi-line selection within a single file
  • multiple files within a single View Group

To perform the former, just select something within the active view and initiate the request without switching to another tab; the selection will be added to the request as a preceding message (each selection chunk separated by a new line).

To send whole file(s) along with the request, super+button1 on their tabs so they all become visible in a single view group, then run the [New Message|Chat Model] with Sheets command as shown on the screenshot below. Pay attention that in the given example only README.md and 4.0.0.md will be sent to the server, but not the content of the AI chat.

[!NOTE] It also doesn't matter whether a file persists on disk or is just a virtual buffer with some text in it: if it's selected, its content will be sent either way.

Image handling

Image handling can be invoked with the OpenAI: Handle Image command.

It expects the absolute path of an image to be selected in a buffer when the command is called (something like /Users/username/Documents/Project/image.png). In addition, an instruction can be passed via the input panel to process the image with special treatment. Only png and jpg images are supported.

[!WARNING] The user flow doesn't expect the image path to be passed through the input panel; it has to be selected in the buffer. I'm aware of the UX quality of this design decision, but I'm too lazy to develop it further into some better state.

In-buffer LLM use case

  1. You can pick one of the following modes: append, replace, insert. They're quite self-descriptive. They should be set up in the assistant settings to take effect (see the sketch below).
  2. Select some text to manipulate (these modes are useless otherwise) and hit OpenAI: New Message.
  3. The plugin will respond accordingly by appending, replacing or inserting some text.

[!IMPORTANT] This is a standalone mode, i.e. the existing chat history won't be sent to the server with such a run.
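
For illustration, an assistant entry with one of those modes set might look like the following sketch. The field names here (prompt_mode, assistant_role and friends) are assumptions rather than the authoritative schema, which lives in the plugin settings (see the note below):

{
    // a hypothetical assistant entry; check the plugin settings for the exact field names
    "name": "Insert helper",
    "prompt_mode": "insert",  // one of: append, replace, insert
    "chat_model": "gpt-4o",
    "assistant_role": "Insert code or text that matches the instruction"
}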

[!NOTE] A more detailed manual, including various assistant configuration examples, can be found within the plugin settings.

Other features

Open Source models support (llama.cpp, ollama)

  1. Replace "url" setting of a given model to point to whatever host you're server running on (e.g."http://localhost:8080").
  2. [Optional] Provide a "token" if your provider required one.
  3. Tweak "chat_model" to a model of your choice and you're set.

[!NOTE] You can set both url and token either globally or on a per-assistant-instance basis, making you capable of freely switching between closed source and open source models within a single session. A sketch of a per-assistant setup follows.
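
For example, a per-assistant entry pointed at a local server might look like the sketch below. The url, token and chat_model keys come straight from the steps above; the name field and the model value are made up for illustration:

{
    // a hypothetical assistant entry for a local llama.cpp/ollama server
    "name": "Local model",
    "url": "http://localhost:8080",  // step 1: your server's host
    "token": "",                     // step 2: optional, if your provider requires one
    "chat_model": "llama3"           // step 3: a model of your choice
}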

Settings

The OpenAI Completion plugin has a settings file where you can set your OpenAI API key. This is required for most providers to work. To set your API key, open the settings via Preferences -> Package Settings -> OpenAI -> Settings and paste your API key into the token property, as follows:

{
    "token": "sk-your-token",
}

ollama setup specifics

If you're here, it means the model you're using with ollama is talking nonsense. This is because the temperature property of a model, which is 1, somewhat doubles on ollama's side, so it becomes 2, which is a little too much for a good model response. So to make things work you have to set the temperature to 1.
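
Assuming the temperature is exposed as a plain key in the assistant settings (an assumption; the exact schema is documented in the plugin settings), the fix would look something like this, with 11434 being the usual ollama default port:

{
    // a hypothetical assistant entry for ollama with the temperature pinned explicitly
    "url": "http://localhost:11434",
    "chat_model": "llama3",
    "temperature": 1
}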

Key bindings

You can bind keys for a given plugin command in Preferences -> Package Settings -> OpenAI -> Key Bindings. For example, you can bind the “New Message” command like this:

{
    "keys": [ "super+k", "super+'" ],
    "command": "openai",
    "args": { "mode": "chat_completion" }
},

[Multi]Markdown syntax with syntax highlight support

It just works.

[!IMPORTANT] It's highly recommended to install the MultimarkdownEditing package to get syntax highlighting for a broader set of languages.

Proxy support

You can set it up by overriding the proxy property in the OpenAI completion settings as follows:

"proxy": {
    "address": "127.0.0.1", // required
    "port": 9898, // required
    "username": "account",
    "password": "sOmEpAsSwOrD"
}

Disclaimers

[!WARNING] All selected code will be sent to the OpenAI servers (if not using a custom API provider) for processing, so make sure you have all the necessary permissions to do so.

[!NOTE] This plugin was initially about 80% written by GPT-3.5 back then. I was here mostly for debugging purposes rather than digging through the ST API. It's pure magic, I swear!